Creating a test strategy for asynchronous microservices applications
If you want to explore the concepts presented in this article further, I have created a practical implementation of them using Java and related frameworks. You can find it on my GitHub: https://github.com/teixeira-fernando/EcommerceApp
Microservices architecture is a common pattern that you can find in applications from companies around the world. There are many ways to implement it, involving different kinds of technologies and components. One of those possibilities is to use a message queue, such as Apache Kafka or RabbitMQ, which brings asynchronous behavior to the microservices. This approach has its pros and cons, but in this article I will focus on the quality and testing aspects of it.
Many testers are used to testing synchronous microservices and APIs. They know what they need to validate, which tools can assist them, and how to automate those tests, and the internet is full of tutorials with instructions on how to do it. The main challenge comes when a message queue enters the architecture: I have noticed that many QA professionals struggle with asynchronous microservices, because the techniques they are used to make it harder to validate the behavior of these applications.
In the same way that application development must evolve, the test strategies for those applications must evolve too. The approach described here can be applied with different programming languages and frameworks, so let's focus on the concepts of the test strategy, which are the most important point. With that in mind, I will present an approach that worked quite well in some of my previous experiences and personal study projects.
The image below represents a very simple architecture and shows what each kind of test is intended to cover in it:
You can see that we have two microservices in this architecture, both connected to the same Kafka queue. We also have an API gateway, which is the interface the external consumer uses to communicate with microservice 1. To keep things simple, we will consider that microservice 1 exposes an HTTP endpoint that can be called by this external consumer, and that this same microservice publishes an event to the Kafka queue that is then processed by microservice 2.
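To make this flow more concrete, here is a minimal sketch of how microservice 1 could look, assuming Spring Boot and Spring Kafka. The `OrderController` class, the `/orders` endpoint, and the `order-topic` topic are hypothetical names for illustration, not taken from the repository above.

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller for microservice 1: it answers the synchronous HTTP call
// coming through the API gateway and publishes an event for microservice 2.
@RestController
@RequestMapping("/orders")
public class OrderController {

  private final KafkaTemplate<String, String> kafkaTemplate;

  public OrderController(KafkaTemplate<String, String> kafkaTemplate) {
    this.kafkaTemplate = kafkaTemplate;
  }

  @PostMapping
  public ResponseEntity<String> createOrder(@RequestBody String orderJson) {
    // The HTTP response is returned right away; microservice 2 reacts to the
    // event asynchronously when it consumes it from the topic.
    kafkaTemplate.send("order-topic", orderJson);
    return ResponseEntity.status(HttpStatus.CREATED).body(orderJson);
  }
}
```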
Now, let's dive into the details of the strategy behind each kind of test:
- Unit Tests: This kind of test is present in practically any test strategy. Unit tests are really fast to execute, cheap to create and maintain, and can cover many different points of your application. They help us guarantee the behavior of the internal layers of our microservice, such as controller, service, and repository (a minimal example appears after this list).
- Integration Tests: Here these tests have two main responsibilities, applicable to both microservices in the architecture above:
Some integration tests validate the integration with the database. Here we can use an in-memory database so the tests run in an isolated and independent way. In most cases, it is worth exercising all the MVC layers of the functionality, to also guarantee the integration between the internal layers;
The other integration tests help us verify the connection with the Kafka queue. When the microservice is a producer, we validate that it correctly publishes events to the queue; when it is a consumer, we test that it can pull events from the queue and process them. In both cases, it is worth using embedded/in-memory Kafka libraries to keep these tests isolated and independent, which also helps to avoid flakiness (see the sketch after this list).
- Contract Tests: Contract testing has grown a lot in popularity over the last few years, and it helps to deal with interdependence issues between microservices. In our context, it validates that the event contract microservice 1 sends is the same one that microservice 2 expects. The strategy includes a centralized Pact Broker to store those contracts, and we can run this test in our CI pipeline to validate the contract whenever there is any change in the microservices (a consumer-side sketch appears after this list).
- E2E tests: Once all the other tests cover the many different components of the architecture, only a few E2E tests are needed for some final checks (this is an important point, because this is the most costly type of test in our strategy and we want to make it as efficient as possible). The E2E tests cover the external communication with our application, going through the API gateway that provides this interface (an example appears after this list). It is important to highlight that microservice 2 is not covered by the E2E tests, because from the outside we cannot observe its behavior of consuming the Kafka message; because of that, it is even more important that the other kinds of tests cover it.
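For the unit tests, here is a minimal, self-contained sketch using JUnit 5. The `Order` and `OrderValidator` classes and the positive-quantity rule are hypothetical, used only to illustrate the idea of testing an internal layer in isolation.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical domain classes, defined inline so the example compiles on its own.
class Order {
  private final int quantity;

  Order(int quantity) {
    this.quantity = quantity;
  }

  int getQuantity() {
    return quantity;
  }
}

class OrderValidator {
  // Example business rule: an order must have a positive quantity.
  boolean isValid(Order order) {
    return order.getQuantity() > 0;
  }
}

class OrderValidatorTest {

  private final OrderValidator validator = new OrderValidator();

  @Test
  void acceptsOrderWithPositiveQuantity() {
    assertTrue(validator.isValid(new Order(3)));
  }

  @Test
  void rejectsOrderWithZeroQuantity() {
    assertFalse(validator.isValid(new Order(0)));
  }
}
```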
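For the Kafka integration tests, the sketch below shows the embedded-broker idea using spring-kafka-test with JUnit 5, assuming a Spring Boot application that configures a String-based `KafkaTemplate`. The topic name `order-topic` and the direct `kafkaTemplate.send(...)` call are placeholders: in a real test you would trigger the microservice's own producer code and then assert on what reached the in-memory broker.

```java
import static org.assertj.core.api.Assertions.assertThat;

import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.utils.KafkaTestUtils;

// Runs against an in-memory Kafka broker, so no external infrastructure is needed.
@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = "order-topic",
    bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class OrderEventIntegrationTest {

  @Autowired private KafkaTemplate<String, String> kafkaTemplate;
  @Autowired private EmbeddedKafkaBroker embeddedKafka;

  @Test
  void publishesOrderCreatedEventToTheTopic() {
    // Placeholder: in the real test, call the production code that publishes the event.
    kafkaTemplate.send("order-topic", "{\"id\":\"42\",\"product\":\"book\"}");

    // Build a consumer pointing at the embedded broker to verify what was published.
    Map<String, Object> props = KafkaTestUtils.consumerProps("test-group", "true", embeddedKafka);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    Consumer<String, String> consumer =
        new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new StringDeserializer())
            .createConsumer();
    embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "order-topic");

    ConsumerRecord<String, String> record = KafkaTestUtils.getSingleRecord(consumer, "order-topic");
    assertThat(record.value()).contains("\"id\":\"42\"");
    consumer.close();
  }
}
```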
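For the contract tests, here is a sketch of the consumer side (microservice 2) using Pact JVM's JUnit 5 message-pact support. The participant names, the event fields, and the "order created event" description are assumptions for this example; the generated pact file would then be published to the Pact Broker and verified against microservice 1 in the CI pipeline.

```java
import au.com.dius.pact.consumer.MessagePactBuilder;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.consumer.junit5.ProviderType;
import au.com.dius.pact.core.model.annotations.Pact;
import au.com.dius.pact.core.model.messaging.Message;
import au.com.dius.pact.core.model.messaging.MessagePact;
import java.util.List;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

// Consumer-side contract test: microservice 2 describes the event it expects from microservice 1.
@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "microservice-1", providerType = ProviderType.ASYNCH)
class OrderCreatedEventContractTest {

  @Pact(consumer = "microservice-2", provider = "microservice-1")
  MessagePact orderCreatedEventPact(MessagePactBuilder builder) {
    PactDslJsonBody body = new PactDslJsonBody()
        .stringType("id", "42")
        .stringType("product", "book");

    return builder
        .expectsToReceive("an order created event")
        .withContent(body)
        .toPact();
  }

  @Test
  @PactTestFor(pactMethod = "orderCreatedEventPact")
  void processesOrderCreatedEvent(List<Message> messages) {
    // In the real test, hand the payload to microservice 2's message handler
    // and assert that it processes the event as expected.
    String payload = new String(messages.get(0).contentsAsBytes());
    Assertions.assertTrue(payload.contains("product"));
  }
}
```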
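Finally, for the E2E tests, here is a small sketch using REST Assured and JUnit 5, calling the application only through the API gateway. The gateway URL, the `/orders` endpoint, and the response fields are assumptions for illustration.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.notNullValue;

import io.restassured.http.ContentType;
import org.junit.jupiter.api.Test;

// Exercises the deployed application from the outside, through the API gateway only.
class CreateOrderE2ETest {

  // Hypothetical gateway address; in practice this would come from configuration.
  private static final String API_GATEWAY_URL = "http://localhost:8080";

  @Test
  void createsAnOrderThroughTheApiGateway() {
    given()
        .baseUri(API_GATEWAY_URL)
        .contentType(ContentType.JSON)
        .body("{\"product\": \"book\", \"quantity\": 1}")
    .when()
        .post("/orders")
    .then()
        .statusCode(201)
        .body("id", notNullValue())
        .body("product", equalTo("book"));
  }
}
```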
Depending on the infrastructure your application is deployed on, you can add other kinds of tests. For example, let's say everything is deployed on AWS using infrastructure as code with Terraform. You could also include tests for the Terraform code, which helps to guarantee that your infrastructure is correctly configured to support your application and can anticipate some issues.